
Autonomous AI Systems


World is ill-prepared for breakthroughs in AI, say experts

The Guardian

The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of senior experts including two "godfathers" of AI, who warn that governments have made insufficient progress in regulating the technology. A shift by tech companies to autonomous systems could "massively amplify" AI's impact, the group said, and governments need safety regimes that trigger regulatory action if products reach certain levels of ability. The recommendations come from 25 experts including Geoffrey Hinton and Yoshua Bengio, two of the three "godfathers of AI" who have won the ACM Turing award – the computer science equivalent of the Nobel prize – for their work. The intervention comes as politicians, experts and tech executives prepare to meet at a two-day summit in Seoul on Tuesday. The academic paper, called "Managing extreme AI risks amid rapid progress", recommends government safety frameworks that introduce tougher requirements if the technology advances rapidly.


Managing AI Risks in an Era of Rapid Progress

Bengio, Yoshua, Hinton, Geoffrey, Yao, Andrew, Song, Dawn, Abbeel, Pieter, Harari, Yuval Noah, Zhang, Ya-Qin, Xue, Lan, Shalev-Shwartz, Shai, Hadfield, Gillian, Clune, Jeff, Maharaj, Tegan, Hutter, Frank, Baydin, Atılım Güneş, McIlraith, Sheila, Gao, Qiqi, Acharya, Ashwin, Krueger, David, Dragan, Anca, Torr, Philip, Russell, Stuart, Kahneman, Daniel, Brauner, Jan, Mindermann, Sören

arXiv.org Artificial Intelligence

In this short consensus paper, we outline risks from upcoming, advanced AI systems. We examine large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, we propose urgent priorities for AI R&D and governance.


Machine Teaching for Autonomous AI

#artificialintelligence

Just as teachers help students gain new skills, experts can teach artificial intelligence (AI). Machine learning algorithms can adapt and change, much like the learning process itself. Using the machine teaching paradigm, a subject matter expert (SME) can teach AI to improve and optimize a variety of systems and processes. The result is an autonomous AI system. In this course, you'll learn how automated systems make decisions and how to approach building an AI system that will outperform current capabilities.


The Self-Learning Model That Passed The Famous Pommerman Challenge

#artificialintelligence

I recently started an AI-focused educational newsletter that already has over 125,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. The emergence of trends such as self-driving cars and drones has helped to popularize an area of artificial intelligence (AI) research known as autonomous agents. Conceptually, autonomous agents are AI systems that build knowledge in real time based on the characteristics of their surrounding environment as well as of other agents.


How Do We Align Artificial Intelligence with Human Values? - Future of Life Institute

#artificialintelligence

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change?

Recently, some of the top minds in AI and related fields got together to discuss how we can ensure AI remains beneficial throughout this transition, and the result was the Asilomar AI Principles document. The intent of these 23 principles is to offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, "Of course, it's just a start. The Principles represent the beginning of a conversation, and now that the conversation is underway, we need to follow up with broad discussion about each individual principle." The Principles will mean different things to different people, and in order to benefit as much of society as possible, we need to think about each principle individually. As part of this effort, I interviewed many of the AI researchers who signed the Principles document to learn their take on why they signed and what issues still confront us.

Today, we start with the Value Alignment principle. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

Stuart Russell, who helped pioneer the idea of value alignment, likes to compare this to the King Midas story. When King Midas asked for everything he touched to turn to gold, he really just wanted to be rich. He didn't actually want his food and loved ones to turn to gold. We face a similar situation with artificial intelligence: how do we ensure that an AI will do what we really want, while not harming humans in a misguided attempt to do what its designer requested?
"Robots aren't going to try to revolt against humanity," explains Anca Dragan, an assistant professor and colleague of Russell's at UC Berkeley, "they'll just try to optimize whatever we tell them to do.


Peer Review Has Its Shortcomings, But AI Is a Risky Fix

WIRED

Artificial intelligence is luring science into dangerous waters. To make scientific publishing more efficient, commercial publishers now rely more and more on editorial software systems. These are beginning to transform peer review from interaction between humans into interaction between humans and AI. We should think twice before allowing autonomous AI systems to decide what research warrants publication. Janne I. Hukkinen (@JIHukkinen) is professor of environmental policy at the University of Helsinki, Finland, and editor of Ecological Economics.


New research paper reveals the behaviors that give Google the heebie-jeebies about AI

#artificialintelligence

It's hard to think of a company more infatuated with AI than Google. With multi-billion-dollar investments in deep learning startups like DeepMind, and responsibility for some of the biggest advances involving neural networks, Google is the greatest cheerleader artificial intelligence could hope for. But that doesn't mean there aren't things about AI that scare the search giant. In a new paper, entitled "Concrete Problems in AI Safety," Google researchers -- alongside experts from UC Berkeley and Stanford University -- lay out some of the possible "negative side effects" that may arise from AI systems over the coming years. Rather than focusing on the distant threat of superintelligence, the 29-page paper examines "unintended and harmful behavior that may emerge from poor design."